Vickrey-Clarke-Groves (VCG) mechanisms are often used to allocate tasks to selfish and rational agents. VCG mechanisms are incentive-compatible, direct mechanisms that are efficient (i.e., maximise social utility) and individually rational (i.e., agents prefer to join rather than opt out). However, an important assumption of these mechanisms is that the agents will "always" successfully complete their allocated tasks. Clearly, this assumption is unrealistic in many real-world applications, where agents can, and often do, fail in their endeavours. Moreover, whether an agent is deemed to have failed may be perceived differently by different agents. Such subjective perceptions about an agent's probability of succeeding at a given task are often captured and reasoned about using the notion of "trust". Given this background, in this paper we investigate the design of novel mechanisms that take into account the trust between agents when allocating tasks.

Specifically, we develop a new class of mechanisms, called "trust-based mechanisms", that can take into account multiple subjective measures of the probability of an agent succeeding at a given task and produce allocations that maximise social utility, whilst ensuring that no agent obtains a negative utility. We then show that such mechanisms pose a challenging new combinatorial optimisation problem (that is NP-complete), devise a novel representation for solving the problem, and develop an effective integer programming solution (that can solve instances with about 2x10^5 possible allocations in 40 seconds).
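To make the underlying idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual mechanism) of a VCG-style allocation for a single task when agents have subjective success probabilities: the task goes to the agent maximising expected social utility (value weighted by success probability, minus cost), and the winner's transfer reflects the externality it imposes on the others. All names and numbers are illustrative assumptions.

```python
def expected_utility(value, prob, cost):
    # Expected social utility of assigning the task to one agent:
    # the task's value weighted by that agent's success probability,
    # minus the agent's reported cost of attempting it.
    return value * prob - cost


def allocate(value, agents):
    # agents: dict mapping agent name -> (success_probability, cost).
    # Returns (winner, transfer_to_winner) for a single-task setting.
    utils = {a: expected_utility(value, p, c) for a, (p, c) in agents.items()}
    winner = max(utils, key=utils.get)

    # Best expected social utility achievable if the winner were absent
    # (zero if there are no other agents).
    others = [u for a, u in utils.items() if a != winner]
    second_best = max(others) if others else 0.0

    # VCG-style transfer: the winner is paid its cost plus its marginal
    # contribution to expected social utility, so its utility equals
    # utils[winner] - second_best and truthful reporting is a best response.
    _, cost = agents[winner]
    transfer = cost + utils[winner] - second_best
    return winner, transfer


# Illustrative run: agent "A" is likelier to succeed, so it wins and is
# paid its cost plus the margin over the next-best allocation.
winner, pay = allocate(8.0, {"A": (0.5, 1.0), "B": (0.25, 1.0)})
```

In this toy instance, A's expected contribution is 8.0 * 0.5 - 1.0 = 3.0 versus 1.0 for B, so A wins and receives 1.0 + (3.0 - 1.0) = 3.0. The paper's setting is harder: multiple tasks, multiple subjective trust reports per agent, and a combinatorial allocation space, which is what motivates its integer programming formulation.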